
Interpretability

Characteristic Name: Interpretability
Dimension: Usability and Interpretability
Description: Data should be interpretable
Granularity: Information object
Implementation Type: Process-based approach
Characteristic Type: Usage

Verification Metric:

The number of tasks failed or underperformed due to the lack of interpretability of data
The number of complaints received due to the lack of interpretability of data


The implementation guidelines are guidelines to follow with regard to the characteristic. The scenarios are examples of the implementation.

Guideline: Standardise the interpretation process by clearly stating the criteria for interpreting results, so that an interpretation made on one dataset is reproducible.
Scenario: (1) A 10% drop in production efficiency is defined as a severe decline that needs quick remedial action.

Guideline: Facilitate the interaction process based on the user's task at hand.
Scenario: (1) A traffic light system indicates the efficiency of a production line to the workers, a detailed efficiency report is provided to the production manager, and a concise efficiency report is provided to production line supervisors (see the sketch after this list).

Guideline: Design the structure of information in such a way that further format conversions are not necessary for interpretation.
Scenario: (1) A rating scale of (poor, good, excellent) is better than (1, 2, 3) for rating a service level.

Guideline: Ensure that information is consistent between units of analysis (organisations, geographical areas, populations of concern, etc.) and over time, allowing comparisons to be made.
Scenario: (1) The number of doctors per person is used to compare health facilities between regions. (2) The same populations are used over time to analyse epidemic growth.

Guideline: Use appropriate visualisation tools to facilitate interpretation of data through comparisons and contrasts.
Scenario: (1) Usage of tree maps, bar charts, and line graphs.
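The two presentation-oriented guidelines above (the traffic-light signal and the label-based rating scale) can be illustrated with a minimal Python sketch. The thresholds, colours, and labels below are assumptions made for illustration, not values taken from the sources.

# Hypothetical thresholds and labels -- illustrative only.
EFFICIENCY_BANDS = [(0.90, "green"), (0.75, "amber"), (0.0, "red")]
SERVICE_LABELS = {1: "poor", 2: "good", 3: "excellent"}

def traffic_light(efficiency):
    """Map a production-line efficiency ratio (0..1) to a traffic-light colour."""
    for threshold, colour in EFFICIENCY_BANDS:
        if efficiency >= threshold:
            return colour
    return "red"

def service_label(score):
    """Report the descriptive label rather than the raw numeric code."""
    return SERVICE_LABELS.get(score, "unknown")

print(traffic_light(0.82))  # amber
print(service_label(3))     # excellent

By presenting the colour or the label directly, the worker or service user never has to convert a raw number into an interpretation themselves.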

Validation Metric:

How mature is the process to maintain the interpretability of data?

These are examples of how the characteristic might occur in a database.

Example: When an analyst has data with a freshness metric equal to 0, does it mean the data at hand is fresh? What about freshness equal to 10 (suppose we do not stick to the notion proposed in [23])? Is it even fresher? Similar issues may arise with the notion of age: e.g., with age A(e) = 0, we cannot undoubtedly speak about a positive or negative data characteristic because of the semantic meaning of "age", which mostly corresponds to a neutral notion of "period of time".
Source: O. Chayka, T. Palpanas, and P. Bouquet, "Defining and Measuring Data-Driven Quality Dimension of Staleness", Trento: University of Trento, Technical Report # DISI-12-016, 2012.

Example: Consider a database containing orders from customers. A practice for handling complaints and returns is to create an "adjustment" order for backing out the original order and then writing a new order for the corrected information, if applicable. This procedure assigns new order numbers to the adjustment and replacement orders. For the accounting department, this is a high-quality database: all of the numbers come out in the wash. For a business analyst trying to determine trends in the growth of orders by region, this is a poor-quality database. If the business analyst assumes that each order number represents a distinct order, his analysis will be all wrong. Someone needs to explain the practice and the methods necessary to unravel the data to get to the real numbers (if that is even possible after the fact).
Source: J. E. Olson, "Data Quality: The Accuracy Dimension", Morgan Kaufmann Publishers, 9 January 2003.

The definitions are examples of the characteristic as it appears in the sources provided.

Definition: Comparability of data refers to the extent to which data is consistent between organisations and over time, allowing comparisons to be made. This includes using equivalent reporting periods.
Source: HIQA 2011. International Review of Data Quality. Health Information and Quality Authority (HIQA), Ireland. http://www.hiqa.ie/press-release/2011-04-28-international-review-data-quality.

Definition: Data is not ambiguous if it allows only one interpretation. Anti-example: Song.composer = 'Johann Strauss' (father or son?).
Source: KIMBALL, R. & CASERTA, J. 2004. The Data Warehouse ETL Toolkit: Practical Techniques for Extracting, Cleaning, Conforming, and Delivering Data.

Definition: Comparability aims at measuring the impact of differences in applied statistical concepts and measurement tools/procedures when statistics are compared between geographical areas, non-geographical domains, or over time.
Source: LYON, M. 2008. Assessing Data Quality, Monetary and Financial Statistics. Bank of England. http://www.bankofengland.co.uk/statistics/Documents/ms/articles/art1mar08.pdf.

Definition: The most important quality characteristic of a format is its appropriateness. One format is more appropriate than another if it is better suited to users' needs. The appropriateness of the format depends upon two factors: the user and the medium used. Both are of crucial importance. The abilities of human users and computers to understand data in different formats are vastly different. For example, the human eye is not very good at interpreting some positional formats, such as bar codes, although optical scanning devices are. On the other hand, humans can assimilate much data from a graph, a format that is relatively hard for a computer to interpret. Appropriateness is related to the second quality dimension, interpretability.
Source: REDMAN, T. C. 1997. Data Quality for the Information Age. Artech House, Inc.

 

Accuracy to reality

Characteristic Name: Accuracy to reality
Dimension: Accuracy
Description: Data should truly reflect the real world
Granularity: Record
Implementation Type: Process-based approach
Characteristic Type: Usage

Verification Metric:

The number of tasks failed or underperformed due to the lack of accuracy to reality
The number of complaints received due to the lack of accuracy to reality


The implementation guidelines are guidelines to follow with regard to the characteristic. The scenarios are examples of the implementation.

Guideline: Continuously evaluate whether the existing data model is sufficient to represent the real world as required by the organisational need, and make the necessary amendments to the data model if needed.
Scenario: (1) A student who received a concession travel card is not eligible for a concession fare if he terminates his candidature before completion of the course. Hence the data model should have an extra attribute, "current status of candidature".

Guideline: Perform regular audits on mission-critical data to verify that every record has a meaningful existence in reality that is useful for the organisation (see the sketch after this list).
Scenario: (1) All customers existing in the customer master file are actually customers in the customer space open to the organisation (non-customers are not in the customer file). (2) "Greg Glass" is recorded as a glass work company but is in fact an optician. (3) A person's personal details taken from his educational profile may not correctly reflect reality for his insurance profile, even though the information is accurate in its original context.

Guideline: Perform regular audits on mission-critical data to verify that every record has a unique existence in reality.
Scenario: (1) It is difficult to find out whether the professor "Andrew" is from Columbia University or from the University of Queensland.

Guideline: Ensure that information available in the system is accurate in the context of a particular activity or event.
Scenario: (1) The driver details taken from the vehicle registration may not be accurate when trying to identify the person who was actually driving the vehicle when an accident occurred.
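The audit guidelines above can be sketched as a small Python routine. This is a hypothetical illustration: the record fields, the authoritative reference data, and the matching rule are all assumptions, not part of the source.

from collections import Counter

# Hypothetical customer records -- illustrative only.
customers = [
    {"id": 1, "name": "Greg Glass", "business_type": "glass work"},
    {"id": 2, "name": "Acme Optics", "business_type": "optician"},
    {"id": 3, "name": "Acme Optics", "business_type": "optician"},  # duplicate entry
]
# Assumed authoritative reference (e.g. a company register) describing reality.
reference = {"Greg Glass": "optician", "Acme Optics": "optician"}

def audit(records, reference):
    """Flag records that contradict reality and entities recorded more than once."""
    mismatches = [r for r in records if reference.get(r["name"]) != r["business_type"]]
    counts = Counter(r["name"] for r in records)
    duplicates = [name for name, n in counts.items() if n > 1]
    return mismatches, duplicates

mismatches, duplicates = audit(customers, reference)
print("Records not reflecting reality:", mismatches)       # "Greg Glass" recorded as glass work
print("Entities represented more than once:", duplicates)  # "Acme Optics"

In practice the reference data would come from an external, trusted source (a register, a verified master list), and the audit would be scheduled to run regularly on mission-critical tables.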

Validation Metric:

How mature is the process to ensure accuracy to reality?

These are examples of how the characteristic might occur in a database.

Example: If the name of a person is John, the value v = John is correct, while the value v = Jhn is incorrect.
Source: C. Batini and M. Scannapieco, "Data Quality: Concepts, Methodologies, and Techniques", Springer, 2006.

Example: Percent of values that are correct when compared to the actual value. For example, M=Male when the subject is Male. (A worked sketch of this metric follows this table.)
Source: P. Cykana, A. Paul, and M. Stern, "DoD Guidelines on Data Quality Management", in MIT Conference on Information Quality (IQ), 1996, pp. 154-171.

Example: Consider an EMPLOYEE entity (identified by the Employee-Number 314159) and the attribute Year-of-Birth. If the value of Year-of-Birth for employee 314159 is the year the employee was born, the datum is correct.
Source: C. Fox, A. Levitin, and T. Redman, "The Notion of Data and Its Quality Dimensions", Information Processing and Management: An International Journal, Volume 30, Issue 1, Jan-Feb 1994, pp. 9-19.

Example: Consider a database that contains names, addresses, phone numbers, and e-mail addresses of physicians in the state of Texas. This database is known to have a number of errors: some records are wrong, some are missing, and some are obsolete. If you compare the database to the true population of physicians, it is expected to be 85% accurate. If this database is to be used by the state of Texas to notify physicians of a new law regarding assisted suicide, it would certainly be considered poor quality. In fact, it would be dangerous to use it for that intended purpose. If this database were to be used by a new surgical device manufacturer to find potential customers, it would be considered high quality. Any such firm would be delighted to have a potential customer database that is 85% accurate. From it, they could conduct a telemarketing campaign to identify real sales leads with a completely acceptable success rate. The same database: for one use it has poor data quality, and for another it has high data quality.
Source: J. E. Olson, "Data Quality: The Accuracy Dimension", Morgan Kaufmann Publishers, 9 January 2003.

Example: The patient's identification details are correct and uniquely identify the patient.
Source: P. J. Watson, "Improving Data Quality: A Guide for Developing Countries", World Health Organization, 2003.
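The DoD-style metric quoted above (percent of values that are correct when compared to the actual value) can be worked through in a few lines of Python. The recorded and actual values below are invented for illustration.

# Hypothetical recorded values and the corresponding real-world values.
recorded = ["M", "M", "F", "M", "F"]
actual   = ["M", "F", "F", "M", "F"]

correct = sum(1 for r, a in zip(recorded, actual) if r == a)
accuracy_pct = 100.0 * correct / len(actual)
print(f"Accuracy: {accuracy_pct:.0f}%")  # 4 of 5 values match the actual value -> 80%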

The definitions are examples of the characteristic as it appears in the sources provided.

Definition: Determines the extent to which data objects correctly represent the real-world values for which they were designed. For example, the sales orders for the Northeast region must be assigned a Northeast sales representative.
Source: D. McGilvray, "Executing Data Quality Projects: Ten Steps to Quality Data and Trusted Information", Morgan Kaufmann Publishers, 2008.

Definition: The data value correctly reflects the real-world condition.
Source: B. BYRNE, J. K., D. MCCARTY, G. SAUTER, H. SMITH & P. WORCESTER 2008. The information perspective of SOA design, Part 6: The value of applying the data quality analysis pattern in SOA. IBM Corporation.

Definition: The data correctly reflects the Characteristics of a Real-World Object or Event being described. Accuracy and Precision represent the highest degree of inherent Information Quality possible.
Source: ENGLISH, L. P. 2009. Information Quality Applied: Best Practices for Improving Business Information, Processes and Systems. Wiley Publishing.

Definition: Is the information precise enough and close enough to reality?
Source: EPPLER, M. J. 2006. Managing Information Quality: Increasing the Value of Information in Knowledge-intensive Products and Processes. Springer.

Definition:
1) Each identifiable data unit maps to the correct real-world phenomenon.
2) Non-identifying (i.e. non-key) attribute values in an identifiable data unit match the property values for the represented real-world phenomenon.
3) Each identifiable data unit represents at least one specific real-world phenomenon.
4) Each identifiable data unit represents at most one specific real-world phenomenon.
Source: PRICE, R. J. & SHANKS, G. 2005. Empirical refinement of a semiotic information quality framework. Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05), IEEE, 216a-216a.

Definition:
1) The degree to which an information object correctly represents another information object, process, or phenomenon in the context of a particular activity or culture.
2) Closeness of agreement between a property value and the true value (the value that characterizes a characteristic perfectly defined in the conditions that exist when the characteristic is considered).
3) The extent to which the correctness of information is verifiable or provable in the context of a particular activity.
Source: STVILIA, B., GASSER, L., TWIDALE, M. B. & SMITH, L. C. 2007. A framework for information quality assessment. Journal of the American Society for Information Science and Technology, 58, 1720-1733.